27,955 research outputs found

    Comment on “The role of scaling laws in upscaling” by B.D. Wood

    Comment on the article "The role of scaling laws in upscaling" by B.D. Wood

    Modeling of the northern hemisphere ice sheets during the last glacial cycle and glaciological sensitivity

    We present a new three-dimensional thermomechanically coupled ice sheet model of the northern hemisphere to reconstruct the Quaternary ice sheets during the last glacial cycle. The model includes basal sliding, internal representations of the surface mass balance, glacial isostasy, and a treatment for marine calving. The time-dependent forcing consists of temperature and precipitation anomalies from the UKMO GCM scaled to the GRIP ice core δ¹⁸O record. Model parameters were chosen to best match geomorphological inferences on maximum LGM extent and global eustatic sea level change. For our standard run we find a maximum ice volume of 57 × 10⁶ km³ at 18.5 ka cal BP. This corresponds to a eustatic sea level lowering of 110 m after correction for hydro-isostatic displacement and anomalous ice resulting from defects in the PMIP climatic forcing. Of this 110 m, 82 m was stored in the North American ice sheet and 25 m in the Eurasian ice sheet. We determine the qualitative and quantitative response of the model from a comprehensive sensitivity study in which the most important parameters were varied over their respective ranges of uncertainty. Model outputs comparable to the observational record were explored in detail as a linear function along the axes of parameter space of the reference model. The method reveals the dominance of climate uncertainty when modelling the LGM configuration of the northern hemisphere ice sheets, but also highlights the role of ice rheology and basal processes for the aspect ratio, and of glacial isostasy and calving for the timing of maximum ice volume.

    Peer review: beyond the call of duty?

    The number of manuscripts submitted to most scholarly journals has increased tremendously over the last few decades, and shows no sign of leveling off. Increasingly, a key challenge faced by editors of scientific journals like the International Journal of Nursing Studies (IJNS) is to secure peer reviews in a timely fashion for the manuscripts they handle. We hear from editors of some journals that it is not uncommon to have to issue 10–15 invitations before one can secure the peer reviews needed to assess a given manuscript, and although the IJNS generally fares better than this, it is certainly true that a high proportion, probably a majority, of review invitations are declined.

    Most often, researchers declining invitations to review invoke the fact that they are too busy to add yet another item to their already overcommitted schedule. Some reviewers respond that administrators at their university or research center are actively discouraging them from engaging in an activity that seems to bear no tangible benefits. Yet, however one looks at it, peer reviewing is a crucial component of the publishing process. Nobody has yet come up with a viable alternative. Therefore, we need to find a way to convince our colleagues to peer review manuscripts more often. This can be done with a stick or with various types of carrots.

    One “stick”, occasionally envisaged by editors (e.g., Anon., 2009), is straightforward, at least to explain. For the peer-reviewing enterprise to function well, each researcher should review every year as many manuscripts as the number of reviews he or she receives for his or her own papers. So, someone submitting 10 manuscripts in a given year should be willing to review 20 or 30 manuscripts during the same timeframe (assuming that each manuscript is reviewed by 2 or 3 individuals, as is commonly the case). If this person does not meet the required quota of reviews, restrictions would be imposed on the submission of any new manuscript for publication. Boehlert et al. (2009) have advocated such a “stick” in the case of the submission of grant proposals.

    However, the implementation of such an automatic accounting of reviewing activities is fraught with difficulties. For one thing, it would not prevent reviewers from defeating the system by writing short, useless reviews just to make up the numbers. To eliminate that loophole, someone would have to assess whether reviews meet minimal standards of quality before they could be counted in the annual or running total. There would also need to be allowances, for example to let young researchers get established in their careers. This raises the prospect of a complex and potentially expensive system somewhat akin to carbon trading, in which credits for reviewing are granted and then traded, with a verification system to ensure that no one cheats.

    An alternative approach, instead of sanctioning bad reviewing practices, would be to reward good ones. Currently the IJNS publishes the names of all reviewers annually. Other journals go a step further, for example by giving awards to outstanding reviewers (Baveye et al., 2009). The lucky few who are singled out by such awards see their reviewing efforts validated. But fundamentally, these awards do not change the unsupportive atmosphere in which researchers review manuscripts. The problem has to be attacked at its root, in the current culture of universities and research centers, where administrators tend to equate research productivity with the number of articles published and the amount of extramural funding brought in.
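The reciprocity arithmetic behind the review quota discussed earlier is simple enough to sketch. The following is a hypothetical illustration only (the function name and the default of 2 reviewers are invented for the example), not any journal's actual policy:

```python
def review_quota(manuscripts_submitted, reviewers_per_manuscript=2):
    """Hypothetical reviewing quota: each submission consumes
    reviewers_per_manuscript reviews from the community, so parity
    requires contributing the same number of reviews back."""
    return manuscripts_submitted * reviewers_per_manuscript
```

With 2 reviewers per manuscript, submitting 10 manuscripts in a year implies a quota of 20 reviews; with 3 reviewers, it implies 30, matching the figures cited in the editorial.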
    Annual activity reports occasionally require individuals to mention the number of manuscripts or grant proposals reviewed, but these data are currently unverifiable and, therefore, are generally assumed not to matter at all for promotions or salary adjustments.

    There may be ways out of this difficulty. All the major publishers have information on who reviews what, how long reviewers take to respond to invitations, and how long it takes them to send in their reviews. All it would take, in addition, would be for the editors or associate editors who receive reviews to assess and record their usefulness, and one would have a very rich data set which, if it were made available to universities and research centers in a way that preserves the anonymity of the peer-review process, could be used fruitfully to evaluate individuals’ reviewing performance and impact. Of course, one would have to agree on what constitutes a “useful” review. Pointing out typos and syntax errors in a manuscript is useful, but not hugely so. Identifying problems and offering ways to overcome them, proposing advice on how to analyze data better, or editing the text to increase its readability are all ways to make more substantial contributions. Generally, one might consider that there is a gradation in usefulness from reviews focused on finding flaws in a manuscript to those focused on helping authors improve their text. Debate among scientists could result in a reliable set of guidelines on how to evaluate peer reviews.

    Beyond making statistics available to decision makers, other options are also available to raise the level of visibility and recognition of peer reviews (Baveye, 2010). Rightly or wrongly, universities and research centers worldwide now rely more and more on some type of scientometric index, like the h-index (Hirsch, 2005), to evaluate the “impact” of their researchers. In other cases, such as the UK, the basis on which institutions are funded is linked to schemes which have measures such as the impact factor at their core (Nolan et al., 2008). While many researchers see bibliometric analysis as a legitimate tool to explore a discipline’s activities and knowledge sources (see, for example, Beckstead and Beckstead, 2006; Oermann et al., 2008; Urquhart, 2006), previous editorials in the IJNS have noted this trend and expressed disquiet at the distorting effect it could have on academic practice when used to pass judgments on quality (Ketefian and Freda, 2009; Nolan et al., 2008).

    Many of these indices implicitly encourage researchers to publish more articles, which in turn may deter researchers from engaging in peer reviewing. Certainly, none of the current indices encompass in any way the significant impact individuals can have on a discipline via their peer reviewing. But one could conceive of scientometric indices that would include some measure of peer-reviewing impact, calculated on the basis of some of the data mentioned earlier. Clearly, such developments will not happen overnight. Before any of them can materialize, a necessary first step is for researchers to discuss with their campus administration, or the managers of their research institution, the crucial importance of peer reviewing and the need to have this activity valued in the same way that research, teaching, and outreach are. A debate along these lines is long overdue.

    Academic peer review is a necessary part of the publication process, but while publication is recognised and valued, peer review is not.
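For reference, the h-index mentioned above has a precise definition (Hirsch, 2005): a researcher has index h if h of their papers have at least h citations each. A minimal sketch of that computation (a hypothetical reviewing-impact index could, in principle, be computed the same way over recorded reviewing contributions):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= `rank` citations
        else:
            break
    return h
```

For example, citation counts of [10, 8, 5, 4, 3] give h = 4: four papers have at least four citations each, but not five papers with at least five.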
    Even without the pressures of publication-based reward measures, there is a potential for less civic-minded authors to benefit from, but not contribute to, the peer-review system. Current scientometrics actively encourage and reward such behavior in a way that is, ultimately, not sustainable. Once administrators perceive that there is a need in this respect, are convinced that it will not cost a fortune to give peer reviewing more attention, and formulate a clear demand to librarians and publishers to help move things forward, there is hope that this perverse incentive in the current system can be removed. Otherwise the future of the current model of peer review looks bleak, and we may indeed have to look forward to a complex bureaucratic system in which review credits are traded.

    For now, although the IJNS can count itself lucky because the problem affects this journal less than many others, in common with other journals we must thank our peer reviewers, who are acting above and beyond the call of duty as it is perceived by many institutions. Without their efforts, journals like this cannot maintain their high standards. It is time for us to lend our weight to calls for a wide-ranging debate in order to ensure that these efforts are properly acknowledged and rewarded when judging the extent and quality of an academic’s scientific contribution.

    Eye Tracker Accuracy: Quantitative Evaluation of the Invisible Eye Center Location

    Purpose. We present a new method to evaluate the accuracy of an eye-tracker-based eye localization system. Measuring the accuracy of an eye tracker's primary intention, the estimated point of gaze, is usually done with volunteers and a set of fixation points used as ground truth. However, verifying the accuracy of the location estimate of a volunteer's eye center in 3D space is not easily possible. This is because the eye center is an intangible point hidden by the iris. Methods. We evaluate the eye location accuracy by using an eye phantom instead of the eyes of volunteers. For this, we developed a testing stage with a realistic artificial eye and a corresponding kinematic model, which we trained with μCT data. This enables us to precisely evaluate the eye location estimate of an eye tracker. Results. We show that the proposed testing stage with the corresponding kinematic model is suitable for such a validation. Further, we evaluate a particular eye-tracker-based navigation system and show that this system is able to successfully determine the eye center with sub-millimeter accuracy. Conclusions. We show the suitability of the evaluated eye tracker for eye interventions, using the proposed testing stage and the corresponding kinematic model. The results further enable specific enhancement of the navigation system to potentially achieve even better results.

    Natural Resource Abundance and Human Capital Accumulation

    This study examines indicators of human capital accumulation together with data for natural resource abundance and rents in a panel of 102 countries running from 1970 to 1999. Mineral wealth makes a positive and marked difference to human capital accumulation. Matching on observables reveals that cross-country results are not driven by a third factor such as overall economic development. Political stability does seem to affect both human capital accumulation and subsoil wealth, but not enough to overturn my conclusions. Instrumentation reveals that reverse causality running from education to natural resources does not drive the results. Estimation of a panel VAR indicates that, over the three decades, a $1 shock to resource rent generates five cents of extra educational expenditure per year. These results are consistent with Hirschman’s conjecture that enclave economies have weaker production linkages but stronger government revenue linkages than other activities. The “wealth channel” identified in this paper implies that caution should be exercised when discouraging countries from exploiting their mineral wealth, especially for countries where human capital is scarce.
    Keywords: education, natural resources, resource booms, economic development
